Autism spectrum disorder (ASD) is a brain disorder characterized by various signs and symptoms that emerge in early childhood. ASD is also associated with communication deficits and repetitive behaviors in affected individuals. Various ASD detection methods have been developed, including neuroimaging and psychological tests. Among these, the magnetic resonance imaging (MRI) modality is essential to physicians, and clinicians rely on MRI to diagnose ASD accurately. MRI is a non-invasive modality that includes functional (fMRI) and structural (sMRI) neuroimaging methods. However, diagnosing ASD from fMRI and sMRI data by experts is usually laborious and time-consuming. Therefore, several computer-aided diagnosis systems (CADS) based on artificial intelligence (AI) have been developed to assist expert physicians. Conventional machine learning (ML) and deep learning (DL) are the most popular AI schemes for diagnosing ASD. This study aims to review the automated detection of ASD using AI. We review several CADS developed with ML techniques for the automated diagnosis of ASD from MRI modalities. Work on developing automated diagnosis models for ASD with DL techniques is very limited; a summary of the studies developed with DL is provided in the appendix. The challenges encountered during the automated diagnosis of ASD using MRI and AI techniques are then described in detail. In addition, a graphical comparison of studies on the automated diagnosis of ASD using ML and DL is discussed. Finally, we present future approaches for detecting ASD using AI techniques and MRI neuroimaging.
Epileptic seizures are among the most important neurological disorders, and their early diagnosis helps clinicians provide accurate treatment for patients. Electroencephalography (EEG) signals are widely used for epileptic seizure detection because they provide experts with substantial information about brain function. This paper introduces a novel diagnostic procedure that employs fuzzy theory and deep learning techniques. The proposed method is evaluated on the Bonn University dataset with six classification combinations, as well as on the Freiburg dataset. The tunable-Q wavelet transform (TQWT) is used to decompose the EEG signals into different sub-bands. In the feature extraction step, 13 different fuzzy entropies are computed from the TQWT sub-bands, and their computational complexity is calculated to help researchers choose the best set for various tasks. Next, an autoencoder (AE) with six layers is employed for dimensionality reduction. Finally, classification is performed with the standard adaptive neuro-fuzzy inference system (ANFIS) as well as its variants combined with the grasshopper optimization algorithm (ANFIS-GOA), particle swarm optimization (ANFIS-PSO), and breeding swarm optimization (ANFIS-BS). With the proposed method, the ANFIS-BS variant obtains an accuracy of 99.46% in two-class classification on the Bonn dataset and 99.28% on the Freiburg dataset, reaching state-of-the-art performance on both.
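For readers who want a concrete feel for this pipeline, the sketch below strings together the three stages described above. It is only a rough illustration under stated assumptions: PyWavelets' `wavedec` stands in for the TQWT, a single classical fuzzy entropy replaces the paper's 13 variants, and a small Keras autoencoder stands in for the six-layer AE; the ANFIS classifiers and their metaheuristic tuning are omitted. All parameter values are illustrative, not the authors' settings.

```python
import numpy as np
import pywt
from tensorflow import keras

def fuzzy_entropy(x, m=2, r=0.2, n=2):
    """Classical fuzzy entropy (FuzzyEn) with an exponential membership
    function; r is scaled by the signal's standard deviation.  The O(N^2)
    pairwise computation is only practical for short segments/sub-bands."""
    x = np.asarray(x, dtype=float)
    tol = r * np.std(x)
    N = len(x)

    def phi(dim):
        patt = np.array([x[i:i + dim] for i in range(N - dim)])
        patt -= patt.mean(axis=1, keepdims=True)           # remove local baselines
        d = np.max(np.abs(patt[:, None, :] - patt[None, :, :]), axis=2)
        sim = np.exp(-(d ** n) / tol)                       # fuzzy similarity
        np.fill_diagonal(sim, 0.0)                          # drop self-matches
        return sim.sum() / (len(patt) * (len(patt) - 1))

    return np.log(phi(m)) - np.log(phi(m + 1))

def subband_features(epoch, wavelet="db4", level=4):
    """Stand-in for TQWT: wavelet decomposition followed by one fuzzy
    entropy per sub-band (the paper uses 13 fuzzy-entropy variants)."""
    coeffs = pywt.wavedec(epoch, wavelet, level=level)
    return np.array([fuzzy_entropy(c) for c in coeffs])

def build_autoencoder(n_features, code_dim=8):
    """Small dense autoencoder for dimensionality reduction (illustrative
    layer sizes; the paper reports a six-layer AE)."""
    inp = keras.Input(shape=(n_features,))
    h = keras.layers.Dense(32, activation="relu")(inp)
    h = keras.layers.Dense(16, activation="relu")(h)
    code = keras.layers.Dense(code_dim, activation="relu")(h)
    h = keras.layers.Dense(16, activation="relu")(code)
    h = keras.layers.Dense(32, activation="relu")(h)
    out = keras.layers.Dense(n_features, activation="linear")(h)
    ae = keras.Model(inp, out)
    ae.compile(optimizer="adam", loss="mse")
    encoder = keras.Model(inp, code)
    return ae, encoder

# Usage sketch: epochs is an (n_epochs, n_samples) array of EEG segments.
# X = np.stack([subband_features(e) for e in epochs])
# ae, encoder = build_autoencoder(X.shape[1])
# ae.fit(X, X, epochs=50, batch_size=16, verbose=0)
# compressed = encoder.predict(X)   # features passed on to the ANFIS classifiers
```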
Schizophrenia (SZ) is a mental disorder in which, due to the secretion of specific chemicals in the brain, the function of some brain regions becomes imbalanced, leading to a lack of coordination between thoughts, actions, and emotions. This study provides various intelligent deep learning (DL) methods for the automated diagnosis of SZ from electroencephalography (EEG) signals, and the obtained results are compared with those of conventional intelligent methods. To implement the proposed methods, the dataset of the Institute of Psychiatry and Neurology in Warsaw, Poland, has been used. First, the EEG signals are divided into 25-second time frames and then normalized by z-score or norm-L2. In the classification step, two different approaches are considered for SZ diagnosis from EEG signals. In the first approach, the EEG signals are classified with conventional machine learning methods, e.g., support vector machines, k-nearest neighbors, decision trees, naive Bayes, random forests, extremely randomized trees, and bagging. In the second approach, various proposed DL models, namely long short-term memories (LSTMs), one-dimensional convolutional networks (1D-CNNs), and 1D-CNN-LSTMs, are implemented and compared with different activation functions. Among the proposed DL models, the CNN-LSTM architecture has the best performance; in this architecture, the ReLU activation function and a combination of z-score and L2 normalization are used. The proposed CNN-LSTM model achieves an accuracy of 99.25%, which is better than the results of most previous studies in this field. It is worth mentioning that k-fold cross-validation with k = 5 has been used to perform all the simulations.
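A minimal sketch of the kind of 1D-CNN-LSTM described above is given below, assuming Keras. The layer counts, filter sizes, channel count, and the 250 Hz sampling rate are assumptions for illustration, not the architecture or recording parameters reported in the study; only the overall pattern (per-frame z-scoring, Conv1D blocks feeding an LSTM, ReLU activations, softmax output) follows the abstract.

```python
import numpy as np
from tensorflow import keras
from tensorflow.keras import layers

def zscore_frames(frames):
    """Z-score each 25-second frame independently, per channel."""
    mean = frames.mean(axis=1, keepdims=True)
    std = frames.std(axis=1, keepdims=True) + 1e-8
    return (frames - mean) / std

def build_cnn_lstm(n_samples, n_channels, n_classes=2):
    """Illustrative 1D-CNN-LSTM: convolutional blocks extract local EEG
    patterns, an LSTM models their temporal ordering, ReLU throughout."""
    model = keras.Sequential([
        keras.Input(shape=(n_samples, n_channels)),
        layers.Conv1D(32, kernel_size=7, activation="relu"),
        layers.MaxPooling1D(4),
        layers.Conv1D(64, kernel_size=5, activation="relu"),
        layers.MaxPooling1D(4),
        layers.LSTM(64),
        layers.Dense(32, activation="relu"),
        layers.Dropout(0.3),
        layers.Dense(n_classes, activation="softmax"),
    ])
    model.compile(optimizer="adam",
                  loss="sparse_categorical_crossentropy",
                  metrics=["accuracy"])
    return model

# Usage sketch: X has shape (n_frames, n_samples, n_channels), y in {0, 1}.
# 25-second frames at an assumed 250 Hz sampling rate give n_samples = 6250.
# model = build_cnn_lstm(n_samples=6250, n_channels=19)
# model.fit(zscore_frames(X), y, epochs=30, batch_size=32, validation_split=0.2)
```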
Because epileptic seizures result from abnormal activity in the brain, a seizure can affect any process the brain handles. Some signs and symptoms of seizures include confusion, abnormal staring, and rapid, sudden, and uncontrollable hand movements. Seizure detection methods involve neurological examinations, blood tests, neuropsychological tests, and neuroimaging modalities. Among these, neuroimaging modalities have received considerable attention from specialist physicians. One approach that facilitates the accurate and fast diagnosis of epileptic seizures is to employ computer-aided diagnosis systems (CADS) based on deep learning (DL) and neuroimaging modalities. This paper provides a comprehensive overview of DL methods for seizure detection and prediction using neuroimaging modalities. First, DL-based CADS for seizure detection and prediction using neuroimaging modalities are discussed. In addition, descriptions of the various datasets, preprocessing algorithms, and DL models used for seizure detection and prediction are included. Then, studies on rehabilitation tools are presented, covering brain-computer interfaces (BCI), implantables, cloud computing, the Internet of Things (IoT), the implementation of DL techniques on field-programmable gate arrays (FPGAs), and more. The discussion section compares seizure detection and prediction studies and covers the most important challenges in seizure detection and prediction using neuroimaging modalities and DL models. In addition, suggestions for future work in the areas of datasets, DL, rehabilitation, and hardware models are proposed. The final section is devoted to the conclusion, which summarizes the most important findings in this field.
Accurate diagnosis of autism spectrum disorder (ASD), followed by effective rehabilitation, is essential for managing this disorder. Artificial intelligence (AI) techniques can help physicians apply automated diagnosis and rehabilitation procedures. AI techniques include conventional machine learning (ML) methods and deep learning (DL) techniques. Conventional ML methods employ various feature extraction and classification techniques, whereas in DL, feature extraction and classification are accomplished intelligently and integrally. DL methods for diagnosing ASD have focused on neuroimaging-based approaches. Neuroimaging techniques are non-invasive disease markers that may be useful for ASD diagnosis. Structural and functional neuroimaging techniques provide substantial information about the structure (anatomy and structural connectivity) and function (activity and functional connectivity) of the brain. Because of the complex structure and function of the brain, proposing an optimal procedure for ASD diagnosis from neuroimaging data without powerful AI techniques such as DL may be challenging. This paper reviews studies conducted to differentiate ASD with the help of DL networks. Rehabilitation tools that utilize DL networks to support ASD patients are also assessed. Finally, we present important challenges in the automated detection and rehabilitation of ASD and propose some future work.
Investigation and analysis of patient outcomes, including in-hospital mortality and length of stay, are crucial for assisting clinicians in determining a patient's likely outcome at the outset of their hospitalization and for assisting hospitals in allocating their resources. This paper proposes an approach based on combining the well-known gray wolf algorithm with frequent items extracted by association rule mining algorithms. First, the original features are combined with the discriminative extracted frequent items. The best subset of these features is then chosen, and the parameters of the classification algorithms are tuned, using the gray wolf algorithm. The framework was evaluated using a real dataset of 2816 patients from the Imam Ali Kermanshah Hospital in Iran. The study's findings indicate that a low ejection fraction, old age, high CPK values, and high creatinine levels are the main contributors to patient mortality. Several significant and interesting rules related to in-hospital mortality and length of stay have also been extracted and presented. Additionally, the accuracy, sensitivity, specificity, and AUROC of the proposed framework for predicting in-hospital mortality with the SVM classifier were 0.9961, 0.9477, 0.9992, and 0.9734, respectively. According to the framework's findings, adding frequent items as features considerably improves classification accuracy.
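As a rough illustration of how a binary gray wolf optimizer can select a feature subset around an SVM, consider the sketch below. It is a simplified stand-in, not the paper's implementation: it only selects features (the paper also tunes classifier parameters with the same optimizer), uses an S-shaped transfer function chosen for illustration, and assumes the frequent-item indicators have already been mined and appended as binary columns.

```python
import numpy as np
from sklearn.model_selection import cross_val_score
from sklearn.svm import SVC

def binary_gwo_feature_selection(X, y, n_wolves=8, n_iter=20, seed=0):
    """Compact binary gray wolf optimizer: each wolf is a 0/1 mask over the
    feature columns, fitness is cross-validated SVM accuracy on the masked
    features.  Population size, iterations, and the sigmoid transfer are
    illustrative choices."""
    rng = np.random.default_rng(seed)
    n_feat = X.shape[1]
    wolves = rng.integers(0, 2, size=(n_wolves, n_feat))

    def fitness(mask):
        if mask.sum() == 0:
            return 0.0
        return cross_val_score(SVC(kernel="rbf"),
                               X[:, mask.astype(bool)], y, cv=3).mean()

    scores = np.array([fitness(w) for w in wolves])

    for t in range(n_iter):
        a = 2.0 - 2.0 * t / n_iter                     # exploration factor decays to 0
        order = np.argsort(scores)[::-1]
        alpha, beta, delta = wolves[order[:3]]         # three best wolves lead the pack
        for i in range(n_wolves):
            pos = np.zeros(n_feat)
            for leader in (alpha, beta, delta):
                r1, r2 = rng.random(n_feat), rng.random(n_feat)
                A, C = 2 * a * r1 - a, 2 * r2
                D = np.abs(C * leader - wolves[i])
                pos += leader - A * D                  # pull toward each leader
            pos /= 3.0
            prob = 1.0 / (1.0 + np.exp(-pos))          # sigmoid transfer to [0, 1]
            candidate = (rng.random(n_feat) < prob).astype(int)
            cand_score = fitness(candidate)
            if cand_score > scores[i]:                 # greedy replacement
                wolves[i], scores[i] = candidate, cand_score

    best = np.argmax(scores)
    return wolves[best].astype(bool), scores[best]

# Usage sketch: X_aug stacks the original columns with 0/1 indicator columns
# for frequent items mined beforehand (e.g. with mlxtend.frequent_patterns.apriori).
# mask, cv_acc = binary_gwo_feature_selection(X_aug, y)
```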
The spread of misinformation is a prominent problem in today's society, and many researchers in academia and industry are trying to combat it. Due to the vast amount of misinformation that is created every day, it is unrealistic to leave this task to human fact-checkers. Data scientists and researchers have been working on automated misinformation detection for years, and it is still a challenging problem today. The goal of our research is to add a new level to automated misinformation detection: classifying segments of text by persuasive writing technique in order to produce interpretable reasoning for why an article can be marked as misinformation. To accomplish this, we present a novel annotation scheme containing many common persuasive writing tactics, along with a dataset annotated by humans according to this scheme. For this task, we make use of a RoBERTa model for text classification, due to its high performance in NLP. We develop several language model-based baselines and present the results of our persuasive strategy label predictions as well as the improvements these intermediate labels make in detecting misinformation and producing interpretable results.
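A minimal fine-tuning sketch for the segment-classification step, assuming the Hugging Face `transformers` and `datasets` libraries, is shown below. The label names and toy examples are hypothetical placeholders; the paper's own annotation scheme defines the real tag set, and evaluation is omitted for brevity.

```python
# pip install transformers datasets torch
from transformers import (AutoTokenizer, AutoModelForSequenceClassification,
                          Trainer, TrainingArguments)
from datasets import Dataset

# Hypothetical persuasive-writing labels; the paper's annotation scheme
# defines the real tag set.
LABELS = ["emotional_appeal", "exaggeration", "anecdote", "none"]

tokenizer = AutoTokenizer.from_pretrained("roberta-base")
model = AutoModelForSequenceClassification.from_pretrained(
    "roberta-base", num_labels=len(LABELS))

def preprocess(batch):
    # Tokenize each text segment and map its string label to an integer id.
    enc = tokenizer(batch["text"], truncation=True,
                    padding="max_length", max_length=128)
    enc["labels"] = [LABELS.index(l) for l in batch["label"]]
    return enc

# Toy examples only; real training would use the human-annotated segments.
train = Dataset.from_dict({
    "text": ["This miracle cure changed my life overnight!",
             "Officials reported the figures on Tuesday."],
    "label": ["anecdote", "none"],
}).map(preprocess, batched=True)

trainer = Trainer(
    model=model,
    args=TrainingArguments(output_dir="roberta-persuasion",
                           per_device_train_batch_size=8,
                           num_train_epochs=3),
    train_dataset=train,
)
trainer.train()
```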
Developing robust and fair AI systems requires datasets with a comprehensive set of labels that can help ensure the validity and legitimacy of relevant measurements. Recent efforts, therefore, focus on collecting person-related datasets that have carefully selected labels, including sensitive characteristics, and consent forms in place to use those attributes for model testing and development. Responsible data collection involves several stages, including but not limited to determining use-case scenarios, selecting categories (annotations) such that the data are fit for the purpose of measuring algorithmic bias for subgroups, and, most importantly, ensuring that the selected categories/subcategories are robust to regional diversities and inclusive of as many subgroups as possible. Meta, in a continuation of our efforts to measure AI algorithmic bias and robustness (https://ai.facebook.com/blog/shedding-light-on-fairness-in-ai-with-a-new-data-set), is working on collecting a large consent-driven dataset with a comprehensive list of categories. This paper describes our proposed design of such categories and subcategories for Casual Conversations v2.
This work explores the zero-shot compositional learning ability of large pre-trained vision-language models (VLMs) within the prompt-based learning framework and proposes a model (\textit{PromptCompVL}) to solve the compositional zero-shot learning (CZSL) problem. \textit{PromptCompVL} makes two design choices: first, it uses soft-prompting instead of hard-prompting to inject learnable parameters that reprogram VLMs for compositional learning. Second, to address the compositional challenge, it uses a soft-embedding layer to learn primitive concepts in different combinations. By combining both soft-embedding and soft-prompting, \textit{PromptCompVL} achieves state-of-the-art performance on the MIT-States dataset. Furthermore, our proposed model achieves consistent improvement compared to other CLIP-based methods, which shows the effectiveness of the proposed prompting strategies for CZSL.
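To make the soft-prompting plus soft-embedding idea concrete, the toy module below prepends a few learnable prompt vectors to learnable attribute/object ("primitive") embeddings and scores the encoded compositions against image features. It is a conceptual sketch only: a tiny frozen Transformer stands in for CLIP's text tower, and all names and sizes (`SoftPromptComposer`, the 512-d features) are illustrative assumptions rather than the PromptCompVL implementation.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class SoftPromptComposer(nn.Module):
    """Toy illustration of soft-prompting for compositional zero-shot learning:
    learnable prompt vectors plus learnable attribute/object embeddings are fed
    through a frozen text encoder and compared with image features."""

    def __init__(self, n_attrs, n_objs, dim=512, n_prompt_tokens=4):
        super().__init__()
        self.prompt = nn.Parameter(torch.randn(n_prompt_tokens, dim) * 0.02)
        self.attr_emb = nn.Embedding(n_attrs, dim)   # soft primitive embeddings
        self.obj_emb = nn.Embedding(n_objs, dim)
        encoder_layer = nn.TransformerEncoderLayer(d_model=dim, nhead=8,
                                                   batch_first=True)
        self.text_encoder = nn.TransformerEncoder(encoder_layer, num_layers=2)
        for p in self.text_encoder.parameters():     # frozen, as with CLIP weights
            p.requires_grad = False

    def compose(self, attr_ids, obj_ids):
        """Build [prompt tokens; attribute token; object token] sequences and
        encode them into one text feature per (attribute, object) pair."""
        b = attr_ids.shape[0]
        tokens = torch.cat([
            self.prompt.unsqueeze(0).expand(b, -1, -1),
            self.attr_emb(attr_ids).unsqueeze(1),
            self.obj_emb(obj_ids).unsqueeze(1),
        ], dim=1)
        encoded = self.text_encoder(tokens)
        return F.normalize(encoded.mean(dim=1), dim=-1)

    def forward(self, image_feats, attr_ids, obj_ids):
        """Cosine-similarity logits between image features and every
        candidate (attribute, object) composition."""
        text_feats = self.compose(attr_ids, obj_ids)          # (n_pairs, dim)
        image_feats = F.normalize(image_feats, dim=-1)        # (batch, dim)
        return image_feats @ text_feats.t()

# Usage sketch with random tensors standing in for CLIP image features:
# model = SoftPromptComposer(n_attrs=115, n_objs=245)   # MIT-States-sized vocabularies
# logits = model(torch.randn(8, 512), torch.arange(10), torch.arange(10))
# A cross-entropy loss over the correct (attr, obj) pair trains the prompt and
# primitive embeddings while the encoder stays frozen.
```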
Recent research shows synthetic data as a source of supervision helps pretrained language models (PLM) transfer learning to new target tasks/domains. However, this idea is less explored for spatial language. We provide two new data resources on multiple spatial language processing tasks. The first dataset is synthesized for transfer learning on spatial question answering (SQA) and spatial role labeling (SpRL). Compared to previous SQA datasets, we include a larger variety of spatial relation types and spatial expressions. Our data generation process is easily extendable with new spatial expression lexicons. The second one is a real-world SQA dataset with human-generated questions built on an existing corpus with SpRL annotations. This dataset can be used to evaluate spatial language processing models in realistic situations. We show that pretraining with automatically generated data significantly improves the SOTA results on several SQA and SpRL benchmarks, particularly when the training data in the target domain is small.
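A toy version of the kind of lexicon-driven generator described above might look like the following; the relation inventory, phrasings, and closed-world answering rule are hypothetical simplifications, not the paper's actual generation process.

```python
import random

# Hypothetical miniature lexicon; the paper's generator uses much larger
# spatial-relation and spatial-expression inventories.
RELATIONS = {
    "left_of": ["to the left of", "on the left side of"],
    "above":   ["above", "over"],
    "inside":  ["in", "inside"],
}
OBJECTS = ["a red ball", "a small box", "the lamp", "a book"]

def make_example(rng=random):
    """Sample one synthetic SQA item: a context sentence, a yes/no question
    about a relation, and its answer.  Closed-world assumption: any relation
    not stated in the context is answered 'no'."""
    rel, phrases = rng.choice(list(RELATIONS.items()))
    obj_a, obj_b = rng.sample(OBJECTS, 2)
    context = f"There is {obj_a} {rng.choice(phrases)} {obj_b}."
    # Half the questions ask about the true relation, half about a distractor.
    asked_rel = rel if rng.random() < 0.5 else rng.choice(
        [r for r in RELATIONS if r != rel])
    question = f"Is {obj_a} {RELATIONS[asked_rel][0]} {obj_b}?"
    answer = "yes" if asked_rel == rel else "no"
    return {"context": context, "question": question, "answer": answer}

if __name__ == "__main__":
    for _ in range(3):
        print(make_example())
```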